Self-organizing state aggregation for architecture design of Q-learning

Authors

  • Kao-Shing Hwang
  • Hsin-Yi Lin
  • Yuan-Pao Hsu
  • Hung-Hsiu Yu
Abstract

This work describes a novel algorithm that integrates an adaptive resonance method (ARM), i.e. an ART-based algorithm with a self-organized design, and a Q-learning algorithm. By dynamically adjusting the size of the sensitivity region of each neuron and adaptively eliminating redundant neurons, ARM can preserve resources, i.e. available neurons, to accommodate additional categories. As a dynamic programming-based reinforcement learning method, Q-learning uses the learned action-value function Q, which directly approximates Q*, i.e. the optimal action-value function, independently of the policy followed. In the proposed algorithm, ARM functions as a clusterer that classifies input vectors from the outside world. The clustered results are then sent to the Q-learning design in order to learn how to apply the optimal actions to the outside world. Simulation results on the well-known control problem of balancing an inverted pendulum on a cart demonstrate the effectiveness of the proposed algorithm. © 2011 Elsevier Inc. All rights reserved.
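The pipeline the abstract describes, an ART-style clusterer that discretizes continuous observations into categories which then serve as states for a tabular Q-learner, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual ARM design: the class names, the vigilance-radius matching rule, the neuron budget, and all hyperparameters are assumptions made for the sketch.

```python
import math
import random
from collections import defaultdict

class ARTCluster:
    """Simplified ART-style clusterer (illustrative, not the paper's ARM).

    An input is assigned to the nearest existing prototype if it falls
    within the vigilance radius; otherwise a new category is created,
    up to a fixed budget of neurons."""

    def __init__(self, vigilance=0.5, max_neurons=50):
        self.vigilance = vigilance
        self.max_neurons = max_neurons
        self.prototypes = []

    def _nearest(self, x):
        return min(range(len(self.prototypes)),
                   key=lambda i: math.dist(x, self.prototypes[i]))

    def categorize(self, x):
        if self.prototypes:
            best = self._nearest(x)
            if math.dist(x, self.prototypes[best]) <= self.vigilance:
                # Resonance: nudge the winning prototype toward the input.
                self.prototypes[best] = [w + 0.1 * (xi - w)
                                         for w, xi in zip(self.prototypes[best], x)]
                return best
        if len(self.prototypes) < self.max_neurons:
            self.prototypes.append(list(x))   # commit a fresh neuron
            return len(self.prototypes) - 1
        return self._nearest(x)               # budget exhausted: reuse nearest

class QLearner:
    """Tabular Q-learning over the discrete categories from the clusterer."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(lambda: [0.0] * n_actions)

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[state][a])

    def update(self, state, action, reward, next_state):
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])
```

In use, each raw observation (e.g. the four-dimensional cart-pole state vector) would first pass through `categorize`, and the returned category index would serve as the Q-learner's state.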


Similar resources

Combination of Reinforcement Learning and Dynamic Self Organizing Map for Robot Arm Control

This paper shows that a system with a two-link arm can learn arm-reaching movements toward a target object by combining reinforcement learning with dynamic self-organizing maps. The model proposed in this paper represents the state and action spaces of reinforcement learning with dynamic self-organizing maps. Because these spaces are continuous, the proposed model uses two dynamic self-organizing maps (DSOM) to e...


Search and Rescue Robot Navigation Based on A* and Q-Learning

For search and rescue robot navigation in unknown environments, a bionic self-learning algorithm based on A* and Q-Learning is put forward. The algorithm utilizes the Growing Self-organizing Map (GSOM) to build a topological cognitive map of the environment. The heuristic A* search algorithm is used for global path planning. When the local environment changes, Q-Learning is used for local path plann...
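As a concrete illustration of the global-planning half of this scheme, a minimal A* search over an occupancy grid might look like the following. The grid representation, 4-connected moves, and Manhattan heuristic are assumptions made for the sketch; the paper's GSOM-based cognitive map is not reproduced here.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (cells with value 1 are obstacles),
    using the admissible Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])

    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()                     # tie-breaker for heap ordering
    open_heap = [(h(start), next(tie), start)]
    g_cost = {start: 0}
    parent = {start: None}

    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if node == goal:
            path = []                           # walk parents back to start
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[node] + 1           # unit step cost
                if ng < g_cost.get(nbr, float("inf")):
                    g_cost[nbr] = ng
                    parent[nbr] = node
                    heapq.heappush(open_heap, (ng + h(nbr), next(tie), nbr))
    return None                                 # goal unreachable
```

Replanning on local change would then fall to the Q-learning component, with A* rerun only when the global map is updated.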


Self-organizing Neural Architecture for Reinforcement Learning

Self-organizing neural networks are typically associated with unsupervised learning. This paper presents a self-organizing neural architecture, known as TD-FALCON, that learns cognitive codes across multi-modal pattern spaces, involving states, actions, and rewards, and is capable of adapting and functioning in a dynamic environment with external evaluative feedback signals. We present a case s...


Efficient Q-learning by Division of Labor

Q-learning as well as other learning paradigms depend strongly on the representation of the underlying state space. As a special case of the hidden state problem we investigate the effect of a self-organizing discretization of the state space in a simple control problem. We apply the neural gas algorithm with adaptation of learning rate and neighborhood range to a simulated cart-pole problem. Th...
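The rank-based prototype update at the core of the neural-gas discretization mentioned above might be sketched as follows. The learning rate and neighborhood range, which the paper adapts over training, are held fixed here for brevity, and all names are illustrative assumptions.

```python
import math

def neural_gas_step(prototypes, x, lr=0.1, neighborhood=1.0):
    """One neural-gas adaptation step: rank every prototype by its distance
    to the input x, then pull each toward x with a strength that decays
    exponentially with its rank, so closer prototypes move more.
    Returns the index of the best-matching unit, usable as a discrete state."""
    ranked = sorted(range(len(prototypes)),
                    key=lambda i: math.dist(prototypes[i], x))
    for rank, i in enumerate(ranked):
        strength = math.exp(-rank / neighborhood)   # rank-based neighborhood
        prototypes[i] = [w + lr * strength * (xi - w)
                         for w, xi in zip(prototypes[i], x)]
    return ranked[0]
```

The returned winner index then plays the role of the discrete state in the usual Q-learning backup; decaying `lr` and `neighborhood` over time, as the paper does, is omitted from this sketch.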


Sensory-based Robot Navigation Using Self-organizing Networks and Q-learning

We present a rapidly learning neural control architecture for sensory-based navigation of a mobile robot and compare the learning dynamics and the navigation behavior in the context of different implemented network approaches and learning schemes. Our control architecture is a combination of i) alternative vector quantization techniques (Neural gas and Kohonen feature map) for optimal clusterin...



Journal title:
  • Inf. Sci.

Volume 181  Issue 

Pages  -

Publication date 2011